28/07/2020

Our project

We decided to focus on central Colombia, mainly because it is the region that contains the capital.

We built a model for the number of confirmed cases using all the other covariates (plus some we created), and we estimated the predictive accuracy of our selected model.

We decided to consider as central Colombia the following departments/districts: Bogotá D.C., Boyacá, Tolima, Cundinamarca, Meta, Quindío, Cauca, Valle del Cauca, Risaralda, Caldas, Antioquia, Santander, Casanare.

Loading the dataset

colombia_covid <- as.data.frame(read_csv("data/datasets_567855_1056808_Casos1.csv"))
colnames(colombia_covid)[5] <- "Atención"
colnames(colombia_covid)[8] <- "Tipo"
# slicing the main dataset
central.colombia.dep <- c("Bogotá D.C.", "Tolima", "Cundinamarca", "Meta", "Boyacá", "Quindío", "Cauca",
    "Valle del Cauca", "Risaralda", "Caldas", "Antioquia", "Santander", "Casanare")
central.colombia.rows <- which(colombia_covid$`Departamento o Distrito` %in% central.colombia.dep)
colombia_covid <- colombia_covid[central.colombia.rows, ]

Description of variables

ID de caso: ID of the confirmed case.

Fecha de diagnóstico: Date in which the disease was diagnosed.

Ciudad de ubicación: City where the case was diagnosed.

Departamento o Distrito: Department or district where the city belongs to.

Atención: Situation of the patient: recovered, at home, at the hospital, at the ICU or deceased.

Edad: Age of the confirmed case.

Sexo: Sex of the confirmed case.

Tipo: How the person got infected: in Colombia, abroad or unknown.

País de procedencia: Country of origin if the person got infected abroad.

Map

Here we can see our selected cities. The color of each pin reflects the number of cases: fewer than \(10\) cases is “green”, fewer than \(100\) is “orange”, otherwise “red”.
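The binning rule for the pin colors can be sketched as a small helper function (a sketch only: `pin_colour` is our name for it, not part of the original map code, and the map itself is drawn with a mapping package):

```r
# Sketch of the pin-colour rule described above (pin_colour is a
# hypothetical helper, not taken from the original map code).
pin_colour <- function(n_cases) {
  if (n_cases < 10) {
    "green"
  } else if (n_cases < 100) {
    "orange"
  } else {
    "red"
  }
}

pin_colour(7)    # "green"
pin_colour(42)   # "orange"
pin_colour(250)  # "red"
```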

Preprocessing

We had to clean the dataset:

  • We transformed the Fecha de diagnóstico variable into a Date type variable,

  • we fixed the variable Id de caso (some rows were missing so the numbers weren’t consecutive),

  • we created a variable Grupo de edad,

  • we cleaned the column País de procedencia (replaced cities with their country) and created the variable Continente de procedencia (since the former is too fragmented, we aggregated by continent).
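The Grupo de edad step, for instance, can be sketched with cut. The bin boundaries below are an assumption inferred from the labels that appear in the cleaned dataset:

```r
# Hypothetical reconstruction of the Grupo de edad binning; the boundaries
# are inferred from the labels shown in the cleaned dataset, not copied
# from the original code.
age_group <- function(edad) {
  cut(edad,
      breaks = c(-Inf, 18, 30, 45, 60, 75, Inf),
      labels = c("0_18", "19_30", "31_45", "46_60", "60_75", "76+"))
}

# The three sample rows printed below have ages 19, 34 and 50:
as.character(age_group(c(19, 34, 50)))  # "19_30" "31_45" "46_60"
```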

##   ID de caso Fecha de diagnóstico Ciudad de ubicación Departamento o Distrito
## 1          1           2020-03-06              Bogotá             Bogotá D.C.
## 2          2           2020-03-09                Buga         Valle del Cauca
## 3          3           2020-03-09            Medellín               Antioquia
##     Atención Edad Sexo      Tipo País de procedencia Grupo de edad
## 1 Recuperado   19    F Importado              Italia         19_30
## 2 Recuperado   34    M Importado              España         31_45
## 3 Recuperado   50    F Importado              España         46_60
##   Continente de procedencia
## 1                    Europa
## 2                    Europa
## 3                    Europa

New dataset I

##          Date Elapsed time New cases/day Cumulative cases
## 1  2020-03-06            0             1                1
## 2  2020-03-09            3             2                3
## 3  2020-03-11            5             5                8
## 4  2020-03-12            6             2               10
## 5  2020-03-13            7             3               13
## 6  2020-03-14            8             8               21
## 7  2020-03-15            9            13               34
## 8  2020-03-16           10            12               46
## 9  2020-03-17           11            12               58
## 10 2020-03-18           12            24               82

New dataset II

##          Date Elapsed time      Department New cases/day
## 1  2020-03-06            0     Bogotá D.C.             1
## 2  2020-03-09            3       Antioquia             1
## 3  2020-03-09            3 Valle del Cauca             1
## 4  2020-03-11            5       Antioquia             3
## 5  2020-03-11            5     Bogotá D.C.             2
## 6  2020-03-12            6     Bogotá D.C.             2
## 7  2020-03-13            7     Bogotá D.C.             1
## 8  2020-03-13            7            Meta             1
## 9  2020-03-13            7 Valle del Cauca             1
## 10 2020-03-14            8       Antioquia             3
##    Cumulative cases/Department
## 1                            1
## 2                            1
## 3                            1
## 4                            4
## 5                            3
## 6                            5
## 7                            6
## 8                            1
## 9                            2
## 10                           7
##          Date Cumulative cases/Department Elapsed time   Department
## 1  2020-03-09                           1            3    Antioquia
## 2  2020-03-11                           4            5    Antioquia
## 16 2020-04-02                         127           27    Antioquia
## 17 2020-03-06                           1            0  Bogotá D.C.
## 18 2020-03-11                           3            5  Bogotá D.C.
## 40 2020-04-02                         542           27  Bogotá D.C.
## 41 2020-03-15                           1            9 Cundinamarca
## 42 2020-03-16                           2           10 Cundinamarca
## 54 2020-04-01                          42           26 Cundinamarca
## 55 2020-03-15                           1            9    Risaralda
##    New cases/day
## 1              1
## 2              3
## 16            20
## 17             1
## 18             2
## 40            70
## 41             1
## 42             1
## 54             4
## 55             1

Exploring the dataset

Other plots

Here the growth looks exponential, which is consistent with the fact that we are studying the early stage of the outbreak.
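A quick way to check this claim: if growth is exponential, the log of the cumulative count is roughly linear in time. Using the ten rows shown in “New dataset I”:

```r
# Log-linear check on the first ten rows of "New dataset I".
elapsed <- c(0, 3, 5, 6, 7, 8, 9, 10, 11, 12)
cumulative <- c(1, 3, 8, 10, 13, 21, 34, 46, 58, 82)
fit <- lm(log(cumulative) ~ elapsed)
summary(fit)$r.squared  # close to 1: log-counts are nearly linear in time
```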

brks <- seq(-250, 250, 50)
lbls <- as.character(c(seq(-250, 0, 50), seq(50, 250, 50)))

ggplot(data=colombia_covid, aes(x=`Departamento o Distrito`, fill = Sexo)) +  
                              geom_bar(data = subset(colombia_covid, Sexo == "F")) +
                              geom_bar(data = subset(colombia_covid, Sexo == "M"), aes(y=..count..*(-1))) + 
                              scale_y_continuous(breaks = brks,
                                               labels = lbls) + 
                              coord_flip() +  
                              labs(title="Spread of the disease across genders",
                                   y = "Number of cases",
                                   x = "Department",
                                   fill = "Gender") +
                              theme_tufte() +  
                              theme(plot.title = element_text(hjust = .5), 
                                    axis.ticks = element_blank()) +   
                              scale_fill_brewer(palette = "Dark2")  # "Dark3" is not an RColorBrewer palette

#compute percentage so that we can label more precisely the pie chart
age_groups_pie <- colombia_covid %>% 
  group_by(`Grupo de edad`) %>%
  count() %>%
  ungroup() %>%
  mutate(per=`n`/sum(`n`)) %>% 
  arrange(desc(`Grupo de edad`))
age_groups_pie$label <- scales::percent(age_groups_pie$per)

age_pie <- ggplot(age_groups_pie, aes(x = "", y = per, fill = factor(`Grupo de edad`))) + 
  geom_bar(stat="identity", width = 1) +
  theme(axis.line = element_blank(), 
        plot.title = element_text(hjust=0.5)) + 
  labs(fill="Age groups", 
       x=NULL, 
       y=NULL, 
       title="Distribution of the disease across ages") +
  coord_polar(theta = "y") +
  #geom_text(aes(x=1, y = cumsum(per) - per/2, label=label))
  geom_label_repel(aes(x=1, y=cumsum(per) - per/2, label=label), size=3, show.legend = F, nudge_x = 0) +
  guides(fill = guide_legend(title = "Group"))
  
age_pie 

Age-Sex plot

Tipo plot

theme_set(theme_classic())

ggplot(colombia_covid, aes(x = `Fecha de diagnóstico`)) +
  scale_fill_brewer(palette = "Set3") +
  geom_bar(aes(fill=Tipo), width = 0.8) +  # geom_bar counts per date; geom_histogram with stat="count" is deprecated
  theme(axis.text.x = element_text(angle=65, vjust=0.6)) +
  labs(title = "Daily number of confirmed cases", 
       subtitle = "subdivided across type",
       x = "Date of confirmation",
       fill = "Type")

Tipo

En estudio presumably means that it is not yet clear whether the case is imported or not. It seems, however, that there are more imported cases; we can count them:

type_pie <- colombia_covid %>% 
  group_by(Tipo) %>%
  count() %>%
  ungroup() %>%
  mutate(per=`n`/sum(`n`)) %>% 
  arrange(desc(Tipo))
type_pie$label <- scales::percent(type_pie$per)
type_pie<-type_pie[names(type_pie)!="per"]
colnames(type_pie)<-c("Tipo", "Total number", "Percentage")
type_pie
## # A tibble: 3 x 3
##   Tipo        `Total number` Percentage
##   <fct>                <int> <chr>     
## 1 Relacionado            291 29.3%     
## 2 Importado              467 47.0%     
## 3 En estudio             235 23.7%

Continent

Now let’s plot a pie chart to be able to see the distribution of cases across the continents.

The majority of the cases in the country are people who got infected inside Colombia. Among those who contracted the disease abroad, most came from Europe, followed by North America and Central America.

The frequentist approach

Poisson with Elapsed time as predictor

#Running poisson with just the variable representing the time as predictor 
poisson1 <- glm(`Cumulative cases` ~ `Elapsed time`, data=cases, family=poisson)
pred.pois <- poisson1$fitted.values
res.st <- (cases$`Cumulative cases` - pred.pois)/sqrt(pred.pois)
#n=25 
#k=2
#n-k=23
print(paste("Estimated overdispersion", sum(res.st^2)/23))
## [1] "Estimated overdispersion 11.0591714112992"
paste("AIC:", poisson1$aic)
## [1] "AIC: 446.827929331767"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson1$null.deviance, deviance(poisson1)), 2))
## [1] "Null deviance:  7859.81"   "Residual deviance: 280.62"
par(mfrow=c(2,2))
plot(poisson1)

Angela’s attempt

#Running poisson with just the variable representing the time as predictor 
poisson1A <- glm(`Cumulative cases/Department` ~ `Elapsed time`, data=cases_relev_dep, family=poisson)
summary(poisson1A)
## 
## Call:
## glm(formula = `Cumulative cases/Department` ~ `Elapsed time`, 
##     family = poisson, data = cases_relev_dep)
## 
## Deviance Residuals: 
##      Min        1Q    Median        3Q       Max  
## -13.9994   -4.7902   -2.1236    0.8586   16.3553  
## 
## Coefficients:
##                Estimate Std. Error z value Pr(>|z|)    
## (Intercept)    0.982659   0.062709   15.67   <2e-16 ***
## `Elapsed time` 0.167306   0.002779   60.20   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 8732.6  on 82  degrees of freedom
## Residual deviance: 3855.1  on 81  degrees of freedom
## AIC: 4285.4
## 
## Number of Fisher Scoring iterations: 5

New

poisson1A <- glm(`New cases/day` ~ `Elapsed time`, data=cases, family=poisson)
#print(paste("Estimated overdispersion", sum(res.st^2)/23))
paste("AIC:", poisson1A$aic)
## [1] "AIC: 317.731647373115"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson1A$null.deviance, deviance(poisson1A)), 2))
## [1] "Null deviance:  870.39"    "Residual deviance: 191.78"
par(mfrow=c(2,2))
plot(poisson1A)

Poisson with time plus gender

poisson2 <- glm(`Cumulative cases` ~ `Elapsed time` + Sexo_M, data=data1, family=poisson)
paste("AIC:", poisson2$aic)
## [1] "AIC: 448.106201285854"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson2$null.deviance, deviance(poisson2)), 2))
## [1] "Null deviance:  7859.81"  "Residual deviance: 279.9"
par(mfrow=c(2,2))
plot(poisson2)

Poisson with Elapsed time plus Grupo de edad

poisson3 <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+`, data=data1, family=poisson)
pred.pois3 <- poisson3$fitted.values
res.st3 <- (data1$`Cumulative cases` - pred.pois3)/sqrt(pred.pois3)
#n=25 
#k=7
#n-k=18
print(paste("Estimated overdispersion", est.overdispersion <- sum(res.st3^2)/18))
## [1] "Estimated overdispersion 10.001532070592"
paste("AIC:", poisson3$aic)
## [1] "AIC: 376.950305422531"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson3$null.deviance, deviance(poisson3)), 2))
## [1] "Null deviance:  7859.81"   "Residual deviance: 200.75"
par(mfrow=c(2,2))
plot(poisson3)

Poisson with Elapsed time, Age and Department as predictors

poisson4 <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.` + `Departamento o Distrito_Boyacá` + `Departamento o Distrito_Caldas` + `Departamento o Distrito_Casanare` + `Departamento o Distrito_Cauca` + `Departamento o Distrito_Cundinamarca` + `Departamento o Distrito_Meta` + `Departamento o Distrito_Quindío` + `Departamento o Distrito_Risaralda` + `Departamento o Distrito_Santander` + `Departamento o Distrito_Tolima` + `Departamento o Distrito_Valle del Cauca`, data=data1, family=poisson)
par(mfrow=c(2,2))
plot(poisson4)

pred.pois4 <- poisson4$fitted.values
res.st4 <- (data1$`Cumulative cases` - pred.pois4)/sqrt(pred.pois4)
#n=25 
#k=19
#n-k=6
print(paste("Estimated overdispersion", est.overdispersion <- sum(res.st4^2)/6))
## [1] "Estimated overdispersion 0.516840130836562"
paste("AIC:", poisson4$aic)
## [1] "AIC: 203.506208494973"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson4$null.deviance, deviance(poisson4)), 2))
## [1] "Null deviance:  7859.81" "Residual deviance: 3.3"

Poisson with Elapsed time, Age, Departments and Continent of origin as predictors

#poisson5 <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.`+`Departamento o Distrito_Boyacá`+`Departamento o Distrito_Caldas` + `Departamento o Distrito_Casanare`+`Departamento o Distrito_Cauca`+`Departamento o Distrito_Cundinamarca`+`Departamento o Distrito_Meta`+`Departamento o Distrito_Quindío`+`Departamento o Distrito_Risaralda`+`Departamento o Distrito_Santander`+`Departamento o Distrito_Tolima`+`Departamento o Distrito_Valle del Cauca`+`Continente de procedencia_Asia`+`Continente de procedencia_Centroamérica`+`Continente de procedencia_Colombia`+`Continente de procedencia_Europa`+`Continente de procedencia_Norteamérica` + `Continente de procedencia_Sudamerica`, data=data1, family=poisson)
#par(mfrow=c(2,2))
#plot(poisson5)
#paste("AIC:", poisson5$aic)
#paste(c("Null deviance: ", "Residual deviance:"),
#       round(c(poisson5$null.deviance, deviance(poisson5)), 2))

ANOVA to compare the Poisson models

#anova(poisson1, poisson3, poisson4, poisson5, test="Chisq")
anova(poisson1, poisson3, poisson4, test="Chisq")
## Analysis of Deviance Table
## 
## Model 1: `Cumulative cases` ~ `Elapsed time`
## Model 2: `Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + 
##     `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + 
##     `Grupo de edad_76+`
## Model 3: `Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + 
##     `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + 
##     `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.` + 
##     `Departamento o Distrito_Boyacá` + `Departamento o Distrito_Caldas` + 
##     `Departamento o Distrito_Casanare` + `Departamento o Distrito_Cauca` + 
##     `Departamento o Distrito_Cundinamarca` + `Departamento o Distrito_Meta` + 
##     `Departamento o Distrito_Quindío` + `Departamento o Distrito_Risaralda` + 
##     `Departamento o Distrito_Santander` + `Departamento o Distrito_Tolima` + 
##     `Departamento o Distrito_Valle del Cauca`
##   Resid. Df Resid. Dev Df Deviance  Pr(>Chi)    
## 1        23    280.624                          
## 2        18    200.747  5   79.878 8.901e-16 ***
## 3         6      3.303 12  197.444 < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Quasi Poisson with Elapsed time as predictor

poisson1quasi <- glm(`Cumulative cases` ~ `Elapsed time`, data=cases, family=quasipoisson)
par(mfrow=c(2,2))
plot(poisson1quasi)

pred.poisq <- poisson1quasi$fitted.values
res.stq <- (cases$`Cumulative cases` - pred.poisq)/sqrt(summary(poisson1quasi)$dispersion*pred.poisq)
print(paste("Estimated overdispersion", sum(res.stq^2)/23))
## [1] "Estimated overdispersion 0.999988649853625"
paste("AIC:", poisson1quasi$aic)
## [1] "AIC: NA"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson1quasi$null.deviance, deviance(poisson1quasi)), 2))
## [1] "Null deviance:  7859.81"   "Residual deviance: 280.62"

Quasi Poisson with Elapsed time and Age as predictor

#Let's apply a quasi poisson and see what happens
poisson2quasi <- glm(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+`, data=data1, family=quasipoisson)
par(mfrow=c(2,2))
plot(poisson1quasi)

pred.poisq2 <- poisson2quasi$fitted.values
res.stq2 <- (data1$`Cumulative cases` - pred.poisq2)/sqrt(summary(poisson2quasi)$dispersion*pred.poisq2)
print(paste("Estimated overdispersion", sum(res.stq2^2)/18))
## [1] "Estimated overdispersion 0.999984520837828"
paste("AIC:", poisson2quasi$aic)
## [1] "AIC: NA"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(poisson2quasi$null.deviance, deviance(poisson2quasi)), 2))
## [1] "Null deviance:  7859.81"   "Residual deviance: 200.75"

Negative Binomial with Elapsed time as predictor

nb1 <- glm.nb(`Cumulative cases` ~ `Elapsed time`, data=data1)
par(mfrow=c(2,2))
plot(nb1)

#n=25, k=2, n-k=23
stdres <- rstandard(nb1)
print(paste("Estimated overdispersion", sum(stdres^2)/23))
## [1] "Estimated overdispersion 1.33483954328732"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(nb1$null.deviance, deviance(nb1)), 2))
## [1] "Null deviance:  477.96"   "Residual deviance: 27.85"

Negative Binomial with Elapsed time plus Age as predictors

nb2 <- glm.nb(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+`, data=data1)
par(mfrow=c(2,2))
plot(nb2)

paste(c("Null deviance: ", "Residual deviance:"),
       round(c(nb2$null.deviance, deviance(nb2)), 2))
## [1] "Null deviance:  526.5"    "Residual deviance: 28.15"

Negative Binomial with Elapsed time plus Department as predictors

nb3 <- glm.nb(`Cumulative cases` ~ `Elapsed time` + `Departamento o Distrito_Bogotá D.C.` + `Departamento o Distrito_Boyacá`+`Departamento o Distrito_Caldas`+`Departamento o Distrito_Casanare`+`Departamento o Distrito_Cauca`+`Departamento o Distrito_Cundinamarca`+`Departamento o Distrito_Meta`+`Departamento o Distrito_Quindío`+`Departamento o Distrito_Risaralda`+`Departamento o Distrito_Santander`+`Departamento o Distrito_Tolima`+`Departamento o Distrito_Valle del Cauca`, data=data1)
par(mfrow=c(2,2))
plot(nb3)

paste(c("Null deviance: ", "Residual deviance:"),
       round(c(nb3$null.deviance, deviance(nb3)), 2))
## [1] "Null deviance:  2214.86"  "Residual deviance: 26.19"

Negative Binomial with Elapsed time plus Continent of origin as predictors

nb4 <- glm.nb(`Cumulative cases` ~ `Elapsed time` + `Continente de procedencia_Asia`+`Continente de procedencia_Centroamérica`+`Continente de procedencia_Colombia`+`Continente de procedencia_Europa`+`Continente de procedencia_Norteamérica`+`Continente de procedencia_Sudamerica`, data=data1)
par(mfrow=c(2,2))
plot(nb4)

paste(c("Null deviance: ", "Residual deviance:"),
       round(c(nb4$null.deviance, deviance(nb4)), 2))
## [1] "Null deviance:  864.03"  "Residual deviance: 23.8"

Negative Binomial with Elapsed time, Age and Departments as predictors

nb5 <- glm.nb(`Cumulative cases` ~ `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` + `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` + `Departamento o Distrito_Bogotá D.C.`+`Departamento o Distrito_Boyacá`+`Departamento o Distrito_Caldas`+`Departamento o Distrito_Casanare`+`Departamento o Distrito_Cauca`+`Departamento o Distrito_Cundinamarca`+`Departamento o Distrito_Meta`+`Departamento o Distrito_Quindío`+`Departamento o Distrito_Risaralda`+`Departamento o Distrito_Santander`+`Departamento o Distrito_Tolima`+`Departamento o Distrito_Valle del Cauca`, data=data1)
par(mfrow=c(2,2))
plot(nb5)

# Calculating overdispersion n=25 k=19 n-k=6
stdres <- rstandard(nb5)
print(paste("Estimated overdispersion", sum(stdres^2)/6))
## [1] "Estimated overdispersion 3.85395829427517"
paste(c("Null deviance: ", "Residual deviance:"),
       round(c(nb5$null.deviance, deviance(nb5)), 2))
## [1] "Null deviance:  7858.91" "Residual deviance: 3.3"

Applying ANOVA to compare the negative binomial models

We compare nb1, nb2 and nb5 because they are nested, and we are mainly interested in whether the fifth model is in fact better than the first.

#Applying ANOVA to compare the negative binomial models
anova(nb1, nb2, nb5)
## Likelihood ratio tests of Negative Binomial Models
## 
## Response: Cumulative cases
## Model 1: `Elapsed time`
## Model 2: `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` +
##     `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+`
## Model 3: `Elapsed time` + `Grupo de edad_19_30` + `Grupo de edad_31_45` +
##     `Grupo de edad_46_60` + `Grupo de edad_60_75` + `Grupo de edad_76+` +
##     `Departamento o Distrito_Bogotá D.C.` + `Departamento o Distrito_Boyacá` +
##     `Departamento o Distrito_Caldas` + `Departamento o Distrito_Casanare` +
##     `Departamento o Distrito_Cauca` + `Departamento o Distrito_Cundinamarca` +
##     `Departamento o Distrito_Meta` + `Departamento o Distrito_Quindío` +
##     `Departamento o Distrito_Risaralda` + `Departamento o Distrito_Santander` +
##     `Departamento o Distrito_Tolima` + `Departamento o Distrito_Valle del Cauca`
##          theta Resid. df    2 x log-lik.   Test    df  LR stat.      Pr(Chi)
## 1 1.125073e+01        23       -253.9080                                    
## 2 1.257832e+01        18       -251.9442 1 vs 2     5  1.963751 8.541378e-01
## 3 2.453680e+06         6       -165.5091 2 vs 3    12 86.435145 2.409184e-13

Predictive accuracy of the Poisson model

Predicting with a \(95\%\) confidence interval

Predictive accuracy of the Negative Binomial model

Predicting with a \(95\%\) confidence interval

The Bayesian approach

Poisson regression

As a first attempt, we fit a simple Poisson regression:

\[ \ln(\lambda_i) = \alpha + \beta\cdot \text{elapsed\_time}_i \\ y_i \sim \text{Poisson}(\lambda_i) \\ \alpha \sim \mathcal{N}(0,1) \\ \beta \sim \mathcal{N}(0.25,1) \]

with \(i = 1,\dots,134\), where \(134\) is the number of rows of our dataset and \(y_i\) is the number of cases.

In the Stan program, we used the function poisson_log_rng to describe the distribution of \(y_i\), namely the number of cases each day, and the function poisson_log_lpmf to specify the likelihood.

Posterior predictive check

y_rep<-as.matrix(fit1, pars="y_rep")
ppc_dens_overlay(y = model.data$cases, y_rep[1:200,]) 

poisson_posterior-1

The fit is not satisfactory, probably because of overdispersion; we can check the residuals to confirm this hypothesis.

Residual check

#in this way we check the standardized residuals
mean_y_rep<-colMeans(y_rep)
std_residual<-(model.data$cases - mean_y_rep) / sqrt(mean_y_rep)
qplot(mean_y_rep, std_residual) + hline_at(2) + hline_at(-2)

first_residual-1

The variance of the residuals increases as the predicted values increase. The standardized residuals should have mean 0 and standard deviation 1 (hence the lines at \(+2\) and \(-2\) indicate approximate \(95\%\) error bounds).

The plot of the standardized residuals indicates a large amount of overdispersion.

Classically, overdispersed count data are handled by replacing the Poisson model with the negative binomial.

Negative Binomial model

We try to improve the previous model using the Negative Binomial model:

\[ \ln(\lambda_i) = \alpha + \beta\cdot \text{elapsed\_time}_i \\ y_i \sim \text{Negative Binomial}(\lambda_i, \phi) \\ \alpha \sim \mathcal{N}(0,1) \\ \beta \sim \mathcal{N}(0.25,1) \]

Where the parameter \(\phi\) is called precision and it is such that:

\[ E[y_i] = \lambda_i \\ Var[y_i] = \lambda_i + \frac{\lambda_i^2}{\phi} \]

again \(i=1,\dots,134\). As \(\phi \rightarrow \infty\), the negative binomial approaches the Poisson distribution.
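This mean–variance relation can be checked against R's built-in negative binomial sampler, where size plays the role of \(\phi\) (a quick simulation for illustration, not part of the original analysis):

```r
# rnbinom parameterized by mu and size: E[y] = mu, Var[y] = mu + mu^2/size.
set.seed(42)
lambda <- 10
phi <- 2
y <- rnbinom(1e6, mu = lambda, size = phi)
mean(y)  # close to 10
var(y)   # close to 10 + 10^2 / 2 = 60
```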

The Stan functions we use here are neg_binomial_2_log_rng to specify the distribution of \(y_i\) and neg_binomial_2_log_lpmf for the likelihood.

Posterior predictive check

samples_NB<-rstan::extract(fit2)
y_rep<-samples_NB$y_rep
ppc_dens_overlay(y = model.data$cases, y_rep[1:200,]) 

NB_posterior-1

Residual check

mean_inv_phi<-mean(samples_NB$inv_phi)
mean_y_rep<-colMeans(y_rep)
std_residual<-(model.data$cases - mean_y_rep) / sqrt(mean_y_rep + mean_y_rep^2*mean_inv_phi)
qplot(mean_y_rep, std_residual) + hline_at(2) + hline_at(-2)

NB_residuals.png

The situation is better now, but still we have too many residuals outside the \(95\%\) interval.

Accuracy across departments

ppc_stat_grouped(
  y = model.data$cases,
  yrep = y_rep,
  group = cases_dep$Department,
  stat = "mean",
  binwidth = 0.2
)

NB_deps

We should take into account the differences across departments.

Multilevel Negative Binomial regression

We try to fit the following model, which also includes Age as a covariate:

\[ \ln(\lambda_i) = \alpha + \beta_{time}\cdot \text{elapsed\_time}_i + \beta_{age}\cdot \text{age}_i \\ y_i \sim \text{Negative Binomial}(\lambda_i, \phi) \\ \alpha \sim \mathcal{N}(0,1) \\ \beta_{time} \sim \mathcal{N}(0.5,1) \\ \beta_{age} \sim \mathcal{N}(0,1) \]

Posterior predictive check

samples_NB2<-rstan::extract(fit3)
y_rep<-samples_NB2$y_rep
ppc_dens_overlay(y = model.data2$cases, y_rep[1:200,]) 

NB2_posterior

Residual check

mean_inv_phi<-mean(samples_NB2$inv_phi)
mean_y_rep<-colMeans(y_rep)
std_residual<-(model.data2$cases - mean_y_rep) / sqrt(mean_y_rep + mean_y_rep^2*mean_inv_phi)
qplot(mean_y_rep, std_residual) + hline_at(2) + hline_at(-2)

NB2_res

Accuracy across departments

ppc_stat_grouped(
  y = model.data2$cases,
  yrep = y_rep,
  group = cases_dep$Department,
  stat = "mean",
  binwidth = 0.2
)

NB2_dep

Hierarchical model

In order to improve the fit, we fit a model with a department-specific intercept term.

The varying-intercept model we consider is now:

\[ \ln(\lambda_{i,d}) = \alpha_d + \beta_{time}\cdot \text{elapsed\_time}_i + \beta_{age}\cdot \text{age}_i \\ \alpha_d \sim \mathcal{N}(\mu + \beta_{pop}\cdot \text{pop}_d + \beta_{sur}\cdot \text{surface}_d + \beta_{dens} \cdot \text{density}_d, \sigma_{\alpha}) \\ y_i \sim \text{Negative Binomial}(\lambda_{i,d}, \phi) \]

The priors used for the above model are the following:

\[ \beta_{time} \sim \mathcal{N}(0.5,1) \\ \beta_{age} \sim \mathcal{N}(0,1) \\ \psi \sim \mathcal{N}(0,1) \]

being \(\psi = [\beta_{pop}, \beta_{sur}, \beta_{dens}]\).

New dataset

We added the following covariates to the dataset:

  • people: millions of inhabitants for each region;

  • surface: \(km^2\), extent of each region;

  • density: \(\frac{people}{km^2}\), density of the population in each region.
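For illustration, the three covariates are related as follows; the figures below are approximate public values for two departments, used here only as an example and not taken from the project dataset:

```r
# density is derived from the other two covariates: people per km^2.
dep_info <- data.frame(
  Department  = c("Bogotá D.C.", "Antioquia"),
  people      = c(7.4, 6.7),      # millions of inhabitants (approximate)
  surface_km2 = c(1587, 63612)    # approximate surface in km^2
)
dep_info$density <- dep_info$people * 1e6 / dep_info$surface_km2
dep_info  # Bogotá D.C. is far denser than Antioquia
```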

Posterior predictive check

samples_hier<-rstan::extract(fit4)
y_rep<-samples_hier$y_rep
ppc_dens_overlay(y = model.data.hier$cases, y_rep[1:200,]) 

hier_posterior

Residual check

mean_inv_phi<-mean(samples_hier$inv_phi)
mean_y_rep<-colMeans(y_rep)
std_residual<-(model.data.hier$cases - mean_y_rep) / sqrt(mean_y_rep + mean_y_rep^2*mean_inv_phi)
qplot(mean_y_rep, std_residual) + hline_at(2) + hline_at(-2)

hier_residual

Very few points now lie outside the approximate \(95\%\) error bounds.

Accuracy across departments

ppc_stat_grouped(
  y = model.data.hier$cases,
  yrep = y_rep,
  group = cases_dep$Department,
  stat = "mean",
  binwidth = 0.2
)

hier_deps

We can clearly see that the accuracy across the departments has significantly increased with respect to the previous models.

LOOIC

Leave-One-Out cross-validation is a method for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model, using the log-likelihood evaluated at posterior simulations of the parameter values.
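With the loo package this is typically computed from the pointwise log-likelihood draws. A self-contained sketch on simulated draws (the matrix below merely stands in for the log_lik generated quantity of a fitted model; in the report the draws come from the Stan fits):

```r
library(loo)

set.seed(1)
# 400 posterior draws (4 chains x 100) for 25 observations, simulated here
# only to show the mechanics of the loo interface.
log_lik <- matrix(rnorm(400 * 25, mean = -2, sd = 0.1), nrow = 400)
r_eff <- relative_eff(exp(log_lik), chain_id = rep(1:4, each = 100))
loo_res <- loo(log_lik, r_eff = r_eff)
loo_res$estimates["looic", "Estimate"]  # the LOOIC value compared below
```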

We plot the LOOIC to compare the models:

loo.all.deps<-c(loo.model.Poisson.complete[3], loo.NB.complete[3], loo.model.NB2.complete[3], loo.model.NB.hier.complete[3])

sort.loo.all.deps<- sort.int(loo.all.deps, index.return = TRUE)$x

par(xaxt="n")
plot(sort.loo.all.deps, type="b", xlab="", ylab="LOOIC", main="All departments")
par(xaxt="s")
axis(1, c(1:4), c("Poisson", "NB-sl", "NB-ml", 
                  "hier")[sort.int(loo.all.deps,
                    index.return = TRUE)$ix],
                    las=2)